Semantic image editing uses a local semantic label map to generate the desired content in an edited region. Recent work has borrowed the SPADE block for this task, but it fails to produce pleasing results because of the style discrepancy between the edited region and the surrounding pixels. We attribute this to the fact that SPADE uses only an image-independent local semantic layout and ignores the image-specific style contained in the known pixels. To address this problem, we propose Style-Preserved Modulation (SPM), which comprises two modulation steps: the first modulation fuses the contextual style and the semantic layout to produce fused modulation parameters; the second modulation applies the fused parameters to modulate the feature map. With these two modulations, SPM can inject the given semantic layout while preserving the image-specific contextual style. Moreover, we design a progressive architecture that generates the edited content in a coarse-to-fine manner. The proposed method obtains context-consistent results and significantly alleviates the unpleasant boundary between the generated region and the known pixels.
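The two-step modulation can be illustrated with a minimal sketch. The fusion, scale, and shift operations below are illustrative stand-ins, not the paper's learned convolutional branches:

```python
import numpy as np

def style_preserving_modulation(feature, layout, context_style):
    """Minimal sketch of the two-step Style-Preserved Modulation (SPM).

    Step 1: fuse the image-agnostic semantic layout with the image-specific
    contextual style into modulation parameters (gamma, beta).
    Step 2: apply the fused parameters to the normalized feature map.
    """
    # First modulation: fuse layout and contextual style (assumed: average).
    fused = 0.5 * layout + 0.5 * context_style
    gamma = np.tanh(fused)          # assumed scale branch
    beta = 0.1 * fused              # assumed shift branch

    # Second modulation: normalize the feature map, then scale and shift.
    normed = (feature - feature.mean()) / (feature.std() + 1e-5)
    return normed * (1.0 + gamma) + beta
```

When both the layout and the style input are zero, the function reduces to plain normalization, which makes the role of the fused parameters easy to see.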
Millions of patients worldwide suffer from rare diseases, yet the samples available for a rare disease are far fewer than for common diseases. Moreover, because medical data are sensitive, hospitals are usually reluctant to share patient information for data fusion, citing privacy concerns. These challenges make it difficult for traditional AI models to extract rare-disease features for disease prediction. In this paper, we overcome this limitation by proposing a new approach to rare-disease prediction based on federated meta-learning. To improve prediction accuracy for rare diseases, we design an attention-based meta-learning (ATML) method that dynamically adjusts the attention paid to different tasks according to the measured training effect of the base learners. In addition, a dynamic-weight fusion strategy is proposed to further improve federated learning accuracy; it dynamically selects clients based on the accuracy of each local model. Experiments show that, in the five-shot setting, our approach outperforms the original federated meta-learning algorithm in both accuracy and speed. Compared with each hospital's local model, the proposed model improves average prediction accuracy by 13.28%.
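One way to read the attention mechanism is as a weighting over tasks derived from each base learner's measured training effect. The softmax-over-losses form below is an assumption for illustration; the paper's attention computation may differ:

```python
import math

def attention_task_weights(task_losses, temperature=1.0):
    """Sketch of attention-based task weighting (ATML-style): tasks on which
    the base learner currently performs worse (higher loss) receive more
    attention during meta-updates. Numerically stable softmax over losses."""
    scores = [loss / temperature for loss in task_losses]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The temperature controls how sharply attention concentrates on the hardest tasks; a large temperature approaches uniform weighting.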
Cross-block oil-water layer (OWL) identification is essential for petroleum development. Traditional methods rely mainly on human experience and are therefore strongly affected by subjective factors. AI-based methods have advanced OWL identification, but the performance of existing AI models is limited by geological differences across blocks and by severe long-tailed (class-imbalanced) distributions. In this paper, we address these limitations by proposing dynamic-fusion federated learning (FL) for OWL identification. To overcome geological differences, we propose a dynamic weighting strategy to fuse models and train a general OWL identification model. Furthermore, an F1-score-based re-weighting scheme is designed and a new loss function is theoretically derived to address the long-tailed data problem. In addition, a mask attention mechanism based on geological knowledge is proposed to enhance the model's feature extraction. To the best of our knowledge, this is the first work to identify OWLs with FL. We evaluate the proposed approach on a real well-logging dataset from an oil field and on the public 3W dataset. Experimental results demonstrate that our approach significantly outperforms other AI methods.
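The intuition behind F1-based re-weighting can be sketched in a few lines: classes with low validation F1 (typically the tail classes) receive larger loss weights. The inverse-F1 normalization below is an illustrative choice, not the paper's derived loss:

```python
def f1_reweighting(f1_scores, eps=1e-6):
    """Sketch of F1-score-based loss re-weighting for long-tailed data.

    Each class weight is proportional to the inverse of its validation F1,
    then normalized so the weights average to 1 (keeping the overall loss
    scale roughly unchanged)."""
    inv = [1.0 / (f + eps) for f in f1_scores]
    total = sum(inv)
    n = len(inv)
    return [n * w / total for w in inv]
```

These weights would then multiply the per-class terms of a standard classification loss, so gradient updates emphasize the poorly identified tail classes.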
Federated learning learns from decentralized data by fusing collaborative models from local nodes. However, the traditional coordinate-based model averaging of FedAvg ignores the information each parameter implicitly encodes and may suffer from structural feature misalignment. In this work, we propose Fed2, a feature-aligned federated learning framework that resolves this problem by establishing firm structure-feature alignment across the collaborative models. Fed2 consists of two major designs. First, we design a feature-oriented model-structure adaptation method to ensure explicit feature allocation across different neural network structures; applying this structure adaptation to the collaborative models allows matched structures to be initialized with similar feature information at a very early training stage. Then, during the federated learning process, we propose a feature-paired averaging scheme that guarantees aligned feature distributions and avoids feature-fusion conflicts under both IID and non-IID scenarios. As a result, Fed2 effectively improves federated learning convergence under a wide range of homogeneous and heterogeneous settings, providing excellent convergence speed, accuracy, and computation/communication efficiency.
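The difference between coordinate-wise averaging and feature-paired averaging can be made concrete with a toy sketch. Representing each client model as a list of per-unit parameters and the alignment as an explicit permutation is an illustrative simplification of Fed2's structure adaptation:

```python
def feature_paired_average(client_params, pairings):
    """Sketch of feature-paired averaging: each client's feature units are
    first aligned to a shared reference ordering via a pairing (permutation),
    and only then averaged -- unlike plain coordinate-wise FedAvg, which
    averages unit i of every client regardless of what feature it encodes.

    pairings[k][i] gives the index of client k's unit that corresponds to
    reference unit i."""
    n_clients = len(client_params)
    n_units = len(pairings[0])
    averaged = []
    for i in range(n_units):
        vals = [client_params[k][pairings[k][i]] for k in range(n_clients)]
        averaged.append(sum(vals) / n_clients)
    return averaged
```

With identity pairings this reduces to ordinary FedAvg; with non-trivial pairings, units encoding the same feature are fused together even when they sit at different coordinates.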
Entity alignment (EA), which aims to find entities with the same meaning across different knowledge graphs (KGs), has attracted wide attention in both academia and industry. Substantial multi-step relation paths exist between entities in KGs, indicating the semantic relations between entities. However, existing methods rarely consider path information, because not all natural paths facilitate EA judgment. In this paper, we propose RPR-RHGT, a more effective entity alignment framework that integrates the relation and path structure information as well as the heterogeneous information in KGs. Notably, a reliable path reasoning algorithm is developed to generate paths favorable to the EA task from the relation structures of the KGs; to our knowledge, this is the first algorithm in the literature to successfully exploit unrestricted path information. In addition, to effectively capture the heterogeneous features in entity neighborhoods, a heterogeneous graph transformer is designed to model the relation and path structures of the KGs. Extensive experiments on three well-known datasets show that RPR-RHGT significantly outperforms 11 state-of-the-art methods, exceeding the best-performing baseline by up to 8.62% in Hits@1. We also show better performance than the baselines across different training-set ratios and on harder datasets.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
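The token-relation distillation finding can be illustrated with a small sketch: instead of matching features or the CLS token directly, match the token-to-token affinity matrices of student and teacher. The softmax-normalized dot-product relations below are an illustrative simplification of the paper's Q/K/V relations:

```python
import numpy as np

def relation_distill_loss(student_tokens, teacher_tokens):
    """Sketch of token-relation distillation: compute an N x N token
    affinity matrix for each model and penalize their squared difference.
    Because the relation matrices depend only on the number of tokens N,
    student and teacher may use different embedding dimensions."""
    def relations(tokens):
        sim = tokens @ tokens.T / np.sqrt(tokens.shape[1])
        sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(sim)
        return e / e.sum(axis=1, keepdims=True)     # row-wise softmax

    rs = relations(student_tokens)
    rt = relations(teacher_tokens)
    return float(np.mean((rs - rt) ** 2))
```

The dimension-independence is the practical point: a ViT-Tiny student can be supervised by a much wider teacher without any projection head on the targets.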
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
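The NAIVEATTACK idea — stamping a trigger into the data entering distillation — can be sketched in a few lines. The patch location, size, and value below are illustrative assumptions, not the paper's exact configuration:

```python
def add_trigger(image, trigger_value=1.0, size=2):
    """Sketch of NAIVEATTACK-style trigger insertion: stamp a small square
    patch into the bottom-right corner of an image, here represented as a
    nested list of pixel values. The input image is left unmodified."""
    out = [row[:] for row in image]  # copy so the caller's image is untouched
    h, w = len(out), len(out[0])
    for r in range(size):
        for c in range(size):
            out[h - 1 - r][w - 1 - c] = trigger_value
    return out
```

In the attack setting, this stamp is applied to the raw data at the initial distillation phase (NAIVEATTACK), whereas DOORPING instead keeps re-optimizing the trigger throughout distillation.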
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT). The MS module helps the model learn complex distortion patterns, while the PMT module eases the optimization of the regression problem by following the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results show that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
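One simple way to realize an easy-to-hard progressive multi-task schedule is to shift the loss weight from an easier auxiliary task toward the harder target task over training. The linear schedule and the task names below are assumptions for illustration, not the paper's exact formulation:

```python
def progressive_task_weights(epoch, total_epochs):
    """Sketch of easy-to-hard progressive multi-task scheduling: early in
    training the easier auxiliary task (e.g., coarse quality-level
    classification) dominates the loss; the weight then shifts linearly
    toward the harder quality-score regression task."""
    t = min(max(epoch / max(total_epochs, 1), 0.0), 1.0)
    return {"easy_task": 1.0 - t, "hard_task": t}
```

The total loss at each epoch would then be `w["easy_task"] * loss_cls + w["hard_task"] * loss_reg`, so the regression head is introduced gradually rather than from the start.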
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, with little precedent in the literature, is more challenging still. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.